In antibody engineering, an essential task is to design novel antibodies whose paratopes bind to a specific antigen at the correct epitopes. Understanding an antibody's structure and its paratope can facilitate a mechanistic understanding of its function. Therefore, predicting antibody structure from sequence alone has long been a highly valuable problem for de novo antibody design. AlphaFold2, a breakthrough in structural biology, provides a solution for predicting protein structures from protein sequences together with computationally expensive coevolutionary multiple sequence alignments (MSAs). However, its computational cost and its unsatisfactory prediction accuracy on antibodies, especially on their complementarity-determining regions (CDRs), limit its application to industrial high-throughput drug design. To learn an informative representation of antibodies, we trained a deep antibody language model (ALM), a transformer, on curated sequences from the Observed Antibody Space database. We also developed a novel model named xTrimoABFold that predicts antibody structure from the antibody sequence alone, building on the pretrained ALM together with efficient Evoformer and structure modules. The model was trained end-to-end on antibody structures in the PDB by minimizing an ensemble loss combining a domain-specific focal loss on the CDRs with the frame-aligned point loss. xTrimoABFold outperforms AlphaFold2 and other protein-language-model-based state-of-the-art methods, e.g., OmegaFold, HelixFold-Single, and IgFold, by a significant margin (30+% improvement in RMSD) while running 151 times faster than AlphaFold2. To the best of our knowledge, xTrimoABFold achieves state-of-the-art antibody structure prediction. Its improvements in both accuracy and efficiency make it a valuable tool for de novo antibody design and could drive further advances in immunology.
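The training objective described above can be sketched as a weighted sum of a focal loss over CDR residues and a frame-aligned point loss. The NumPy sketch below is a minimal illustration only: the function names, the loss weights, and the simplified FAPE (which assumes pre-aligned frames) are assumptions, not the paper's implementation.

```python
import numpy as np

def cdr_focal_loss(p_correct, gamma=2.0):
    # Focal loss over per-residue probabilities: down-weights easy residues so
    # training concentrates on the hard-to-predict CDR positions.
    p = np.clip(p_correct, 1e-8, 1.0)
    return float(np.mean(-((1.0 - p) ** gamma) * np.log(p)))

def fape_loss(pred_xyz, true_xyz, clamp=10.0):
    # Simplified frame-aligned point error: clamped per-atom distance between
    # predicted and true coordinates (local frames assumed pre-aligned here).
    d = np.linalg.norm(pred_xyz - true_xyz, axis=-1)
    return float(np.mean(np.minimum(d, clamp)) / clamp)

def ensemble_loss(p_cdr, pred_xyz, true_xyz, w_focal=1.0, w_fape=1.0):
    # Ensemble objective: domain-specific focal loss on CDRs + FAPE.
    return w_focal * cdr_focal_loss(p_cdr) + w_fape * fape_loss(pred_xyz, true_xyz)
```

The focal term `(1 - p)^gamma` is what shifts gradient mass toward poorly predicted CDR residues, which is the motivation the abstract gives for using it on the CDRs specifically.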
In deep neural networks (DNNs), there are a huge number of weights and multiply-and-accumulate (MAC) operations. Accordingly, it is challenging to apply DNNs on resource-constrained platforms, e.g., mobile phones. Quantization is a method to reduce the size and the computational complexity of DNNs. Existing quantization methods either require hardware overhead to achieve a non-uniform quantization or focus on model-wise and layer-wise uniform quantization, which are not as fine-grained as filter-wise quantization. In this paper, we propose a class-based quantization method to determine the minimum number of quantization bits for each filter or neuron in DNNs individually. In the proposed method, the importance score of each filter or neuron with respect to the number of classes in the dataset is first evaluated. The larger the score is, the more important the filter or neuron is and thus the larger the number of quantization bits should be. Afterwards, a search algorithm is adopted to exploit the different importance of filters and neurons to determine the number of quantization bits of each filter or neuron. Experimental results demonstrate that the proposed method can maintain the inference accuracy with low bit-width quantization. Given the same number of quantization bits, the proposed method can also achieve a better inference accuracy than the existing methods.
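As a rough illustration of the idea, the sketch below ranks filters by an importance score and gives more quantization bits to more important ones. The equal-sized grouping is a hypothetical stand-in for the paper's search algorithm, and `uniform_quantize` is a generic symmetric quantizer, not the authors' code.

```python
import numpy as np

def assign_bits(importance, bit_choices=(2, 4, 8)):
    # Rank filters by importance score; more important filters get more bits.
    # (Equal-sized groups are a hypothetical stand-in for the search algorithm.)
    order = np.argsort(importance)                 # ascending importance
    bits = np.empty(len(importance), dtype=int)
    for group, b in zip(np.array_split(order, len(bit_choices)), bit_choices):
        bits[group] = b
    return bits

def uniform_quantize(weights, n_bits):
    # Symmetric uniform quantization of one filter's weights to n_bits.
    scale = np.max(np.abs(weights)) / (2 ** (n_bits - 1) - 1)
    return np.round(weights / scale) * scale
```

The point of the per-filter granularity is visible here: a filter with a low score is rounded onto a 2-bit grid while an important filter keeps an 8-bit grid, instead of forcing one bit-width on the whole layer.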
Deep neural networks (DNNs) have been applied successfully in many fields over the past decades. However, the growing number of multiply-and-accumulate (MAC) operations in DNNs prevents their application on resource-constrained and resource-varying platforms, e.g., mobile phones and autonomous vehicles. On such platforms, neural networks need to provide acceptable results quickly, and it should be possible to enhance the accuracy of the results dynamically according to the computational resources available in the computing system. To address these challenges, we propose a design framework called SteppingNet. SteppingNet constructs a series of subnets whose accuracy is incrementally enhanced as more MAC operations become available, thereby allowing a trade-off between accuracy and latency. In addition, the larger subnets in SteppingNet are built upon smaller subnets, so the results of the latter can be reused directly in the former without recomputation. This property allows SteppingNet to decide on the fly whether to enhance the inference accuracy by executing further MAC operations. Experimental results demonstrate that SteppingNet provides effective incremental accuracy improvement, and its inference accuracy consistently outperforms the state of the art under the same limit of computational resources.
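The reuse property can be illustrated with a toy additive model in which each subnet adds one group of MAC operations to the accumulated result of the previous subnet. This is a minimal sketch under that assumption; the real SteppingNet subnets are full neural networks, and the class below is hypothetical.

```python
import numpy as np

class SteppingModel:
    """Toy incremental model: each step runs one more weight group and reuses
    all partial sums computed so far, so stepping up costs only the new MACs."""

    def __init__(self, weight_groups):
        self.groups = weight_groups
        self.acc = None          # reused partial result of the previous subnet
        self.next_group = 0

    def step(self, x):
        # Execute the next group of MACs; earlier results are not recomputed.
        if self.next_group >= len(self.groups):
            return self.acc      # no further capacity to add
        contrib = self.groups[self.next_group] @ x
        self.acc = contrib if self.acc is None else self.acc + contrib
        self.next_group += 1
        return self.acc          # output of the current (larger) subnet
```

Because each call to `step` only pays for the newly executed group, the runtime can stop after any step once the latency budget is exhausted, which is the accuracy/latency trade-off the abstract describes.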
Although artificial intelligence (AI) has made significant progress in understanding molecules across various domains, existing models typically acquire a single cognitive capability from a single molecular modality. Since the hierarchy of molecular knowledge is profound, even humans learn from different modalities, including intuitive diagrams and professional texts, to aid their understanding. Inspired by this, we propose a molecular multimodal foundation model that is pretrained on molecular graphs and their semantically related textual data (crawled from published Scientific Citation Index papers). This AI model represents a critical attempt to directly bridge molecular graphs and natural language. Importantly, by capturing the specific and complementary information of the two modalities, our proposed model can better grasp molecular expertise. Experimental results show that our model not only exhibits promising performance in cross-modal tasks such as cross-modal retrieval and molecule captioning, but also enhances molecular property prediction and possesses the ability to generate meaningful molecular graphs from natural language descriptions. We believe our model will have a broad impact on AI-empowered research across disciplines such as biology, chemistry, materials, the environment, and medicine.
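A common recipe for this kind of graph-text pretraining is a contrastive (InfoNCE-style) objective that pulls each molecule-graph embedding toward its paired text embedding and away from other captions in the batch. The sketch below illustrates that generic recipe only; it is an assumption for illustration, not the paper's actual loss.

```python
import numpy as np

def contrastive_alignment_loss(graph_emb, text_emb, tau=0.1):
    # InfoNCE-style loss: each molecule graph (row i of graph_emb) should match
    # its own caption (row i of text_emb) better than any other caption.
    g = graph_emb / np.linalg.norm(graph_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = (g @ t.T) / tau                              # cosine sims / temperature
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))             # graph-to-text direction
```

When the paired rows are already aligned the loss is near zero; when pairs are shuffled it grows, which is the signal that drives the two encoders toward a shared space.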
Discrimination has been shown in many machine learning applications, which calls for sufficient fairness testing before deployment in ethics-relevant domains such as face recognition, medical diagnosis, and criminal sentencing. Existing fairness testing approaches are mostly designed to identify individual discrimination, i.e., discrimination against individuals. However, testing against group discrimination, another widely concerning type of discrimination that is mostly hidden, is much less studied. To address this gap, in this work we propose TestSGD, an interpretable testing approach that systematically identifies and measures the hidden (what we call "subtle") group discrimination of a neural network, characterized by conditions over combinations of the sensitive features. Given a neural network, TestSGD first automatically generates an interpretable rule set that categorizes the input space into two groups, exposing the model's group discrimination. TestSGD also provides an estimated group fairness score, based on sampling the input space, to measure the degree of the identified subtle group discrimination; the estimate is guaranteed to be accurate up to an error bound. We evaluate TestSGD on multiple neural network models trained on popular datasets covering both structured data and text data. The experimental results show that TestSGD is effective and efficient at identifying and measuring such subtle group discrimination that has never been revealed before. Furthermore, we show that the testing results of TestSGD can guide the generation of new samples to mitigate the discovered discrimination through retraining, with a negligible accuracy drop.
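The sampling-based group fairness score can be illustrated as the gap in positive-prediction rates between the two groups induced by an interpretable rule. The helper below is a hypothetical sketch; its name and signature are not from the TestSGD implementation.

```python
import random

def estimated_group_fairness_score(model, sample_input, rule, n=10000, seed=0):
    # Monte-Carlo estimate of subtle group discrimination: the gap between the
    # positive-prediction rates of the two input groups defined by `rule`.
    rng = random.Random(seed)
    counts = {True: [0, 0], False: [0, 0]}      # group -> [positives, total]
    for _ in range(n):
        x = sample_input(rng)
        g = bool(rule(x))
        counts[g][0] += int(model(x))
        counts[g][1] += 1
    rates = {g: (pos / tot if tot else 0.0) for g, (pos, tot) in counts.items()}
    return abs(rates[True] - rates[False])
```

Because the score is a mean over i.i.d. samples, Hoeffding-style concentration bounds relate the sample size `n` to the estimation error, which is how a sampling approach can guarantee accuracy up to an error bound.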
Prevailing graph neural network models have achieved significant progress in graph representation learning. However, in this paper we uncover an ever-overlooked phenomenon: a pretrained graph representation learning model tested on complete graphs underperforms the same model tested on well-pruned graphs. This observation reveals that there exist confounders in graphs, which may interfere with the model's learning of semantic information, and that current graph representation methods have not eliminated their influence. To tackle this issue, we propose Robust Causal Graph Representation Learning (RCGRL), which learns robust graph representations against confounding effects. RCGRL introduces an active approach to generate instrumental variables under unconditional moment restrictions, which empowers the graph representation learning model to eliminate confounders and thereby capture discriminative information that is causally related to downstream predictions. We offer theorems and proofs to guarantee the theoretical effectiveness of the proposed approach. Empirically, we conduct extensive experiments on a synthetic dataset and multiple benchmark datasets. The results demonstrate that, compared with state-of-the-art methods, RCGRL achieves better prediction performance and generalization ability.
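The classical statistical intuition behind instrumental variables is two-stage least squares: regress the confounded treatment on the instrument, then regress the outcome on the fitted treatment, which strips out the confounder's influence. The sketch below illustrates only that general idea on synthetic scalars; RCGRL's actual mechanism, which generates instruments under unconditional moment restrictions inside a GNN, is not shown here.

```python
import numpy as np

def two_stage_least_squares(z, x, y):
    # Stage 1: project the confounded treatment x onto the instrument z.
    Z = np.column_stack([np.ones_like(z), z])
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    # Stage 2: regress the outcome y on the de-confounded treatment x_hat.
    Xh = np.column_stack([np.ones_like(x_hat), x_hat])
    return np.linalg.lstsq(Xh, y, rcond=None)[0][1]   # causal-effect estimate

# Synthetic demo: u confounds both x and y, z is a valid instrument for x.
rng = np.random.default_rng(0)
n = 20000
z = rng.normal(size=n)
u = rng.normal(size=n)                     # unobserved confounder
x = z + u
y = 2.0 * x + 3.0 * u + rng.normal(size=n)   # true causal effect of x on y is 2
```

On this data, a plain regression of `y` on `x` is biased upward by the confounder, while the two-stage estimate recovers the true coefficient of 2.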
Neural networks have achieved discernible success in a wide range of applications. Their wide adoption also raises concerns about their dependability and reliability. Similar to traditional decision-making programs, neural networks can have defects that need to be repaired. These defects may cause unsafe behaviors, raise security concerns, or lead to unjust societal impacts. In this work, we address the problem of repairing a neural network for desirable properties such as fairness and the absence of backdoors. The goal is to construct a neural network that satisfies the property by (minimally) adjusting the given network's parameters (i.e., weights). Specifically, we propose CARE (CAusality-based REpair), a causality-based neural network repair technique that 1) performs causality-based fault localization to identify the "guilty" neurons and 2) optimizes the parameters of the identified neurons to reduce the misbehavior. We have empirically evaluated CARE on various tasks such as backdoor removal and neural network repair for fairness and safety properties. Our experimental results show that CARE repairs all the neural networks efficiently and effectively. For fairness repair tasks, CARE successfully improves fairness by 61.91% on average. For backdoor removal tasks, CARE reduces the attack success rate from over 98% to less than 1%. For safety property repair tasks, CARE reduces the property violation rate to less than 1%. The results also show that, thanks to the causality-based fault localization, CARE's repair focuses on the misbehavior and preserves the accuracy of the neural networks.
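The causality-based fault localization step can be illustrated with do-style interventions: zero out one neuron at a time and measure how much the undesirable behavior changes; the neurons whose ablation most reduces the misbehavior are the "guilty" ones. The sketch below is a hypothetical toy, not CARE's implementation.

```python
def causal_responsibility(forward, inputs, n_neurons, is_misbehavior):
    # forward(x, ablate) runs the network, optionally forcing neuron `ablate`
    # to zero (a crude do-intervention on one hidden neuron).
    base = sum(is_misbehavior(forward(x, None)) for x in inputs) / len(inputs)
    scores = []
    for j in range(n_neurons):
        rate = sum(is_misbehavior(forward(x, j)) for x in inputs) / len(inputs)
        scores.append(base - rate)  # positive: ablating neuron j cuts misbehavior
    return scores

def toy_forward(x, ablate):
    # Tiny 2-neuron "network"; neuron 1 is the dominant cause of large outputs.
    h = [x, 2.0 * x]
    if ablate is not None:
        h[ablate] = 0.0
    return h[0] + h[1]
```

Repair would then adjust only the weights feeding the highest-scoring neurons, which is why the accuracy on well-behaved inputs can be preserved.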
At CRYPTO 2019, Gohr made a pioneering attempt and successfully applied deep learning to the NSA block cipher SPECK32/64, achieving higher accuracy than pure differential distinguishers. By its very nature, mining effective features from data plays a crucial role in data-driven deep learning. In this paper, in addition to considering the completeness of the information in the training data of ciphertext pairs, domain knowledge about the structure of differential cryptanalysis is also incorporated into the deep learning training process to improve performance. Moreover, based on SAT/SMT solvers, we find additional high-probability compatible differential characteristics, which effectively improve performance compared with previous work. We build neural distinguishers (NDs) and related-key neural distinguishers (RKNDs) against SIMON and SIMECK. For SIMON32/64, the ND and RKND both reach 11 rounds, with accuracies of 59.55% and 97.90%, respectively. For SIMON64/128, the ND achieves 60.32% accuracy at 13 rounds, while the RKND achieves 95.49%. For SIMECK32/64, 11- and 14-round NDs and RKNDs are obtained, with accuracies of 63.32% and 87.06%, respectively. For SIMECK64/128, we build a 17-round ND and a 21-round RKND, with accuracies of 64.24% and 62.96%, respectively. These are currently the longest (related-key) neural distinguishers with higher accuracy for SIMON32/64, SIMON64/128, SIMECK32/64, and SIMECK64/128.
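A neural distinguisher is trained on labeled ciphertext pairs: label 1 when the plaintext pair follows the chosen input difference, label 0 when the second plaintext is random. The generic data generator below sketches that setup; `encrypt` stands for any round-reduced cipher (SIMON, SIMECK, ...), and the 32-bit block / 64-bit key sizes are illustrative assumptions only.

```python
import random

def make_distinguisher_dataset(encrypt, input_diff, n_samples, rng):
    # Each sample: ((c0, c1), label). Label 1 -> plaintexts differ by
    # input_diff under the same key; label 0 -> second plaintext is random.
    data = []
    for _ in range(n_samples):
        key = rng.getrandbits(64)
        p0 = rng.getrandbits(32)
        label = rng.getrandbits(1)
        p1 = (p0 ^ input_diff) if label else rng.getrandbits(32)
        data.append(((encrypt(p0, key), encrypt(p1, key)), label))
    return data
```

A network is then trained to tell the two label classes apart from (c0, c1) alone; related-key distinguishers additionally inject a chosen difference into the key schedule.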
Fine-grained named entity typing (FG-NET) aims to classify entity mentions into a wide range of entity types (often hundreds) depending on the context. While distant supervision is the most common way to acquire supervised training data, it brings in label noise, as it assigns type labels to entity mentions irrespective of the mention's context. To cope with this label noise, leading research on FG-NET assumes that the fine-grained entity typing data has a Euclidean nature, which restricts the ability of existing models to combat label noise. Given that fine-grained type hierarchies exhibit a hierarchical structure, hyperbolic space is a natural choice for modeling FG-NET data. In this research, we propose FGNET-RH, a novel framework that exploits hyperbolic geometry in combination with graph structures to perform entity typing in a performance-enhanced fashion. FGNET-RH first encodes mentions with respect to their context using LSTM networks, and later forms a graph to distill/refine the mention encodings in hyperbolic space. Finally, the refined mention encodings are used for entity typing. Experiments on different benchmark datasets show that FGNET-RH improves the performance of FG-NET by up to 3.5% in terms of strict accuracy.
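The appeal of hyperbolic space for a type hierarchy is that distances grow rapidly toward the boundary of the Poincaré ball, so tree-like structures embed with low distortion (coarse types near the origin, fine types near the boundary). The standard Poincaré-ball distance is sketched below; the framework's own pipeline (LSTM encoding, graph refinement) is not reproduced.

```python
import math

def poincare_distance(u, v):
    # Geodesic distance in the Poincare ball model of hyperbolic space,
    # for points with Euclidean norm strictly below 1.
    sq_u = sum(a * a for a in u)
    sq_v = sum(b * b for b in v)
    sq_diff = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.acosh(1.0 + 2.0 * sq_diff / ((1.0 - sq_u) * (1.0 - sq_v)))
```

Two points a fixed Euclidean distance apart are far closer near the origin than near the boundary, which is what lets many fine-grained sibling types stay separable while coarse types remain compactly clustered.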
Patients care about what their teeth will look like after orthodontic treatment. Orthodontists usually describe the expected tooth movement based on the original smile images, which is unconvincing. The rise of deep-learning generative models changes this situation: they can visualize the outcome of orthodontic treatment and help patients foresee their future teeth and facial appearance. While previous studies mainly focus on 2D or 3D virtual treatment outcomes (VTOs) at the profile level, the problem of simulating the treatment outcome in a frontal facial image is poorly explored. In this paper, we build an efficient and accurate system for simulating virtual teeth-alignment effects in a frontal facial image. Our system takes as input a frontal face image of a patient with visibly malpositioned teeth and the patient's 3D scanned teeth model, and progressively generates visual results of the patient's teeth given the specific orthodontic planning steps from the doctor (i.e., the specified translations and rotations of each individual tooth). We design a multi-modal encoder-decoder-based generative model to synthesize identity-preserving frontal facial images with aligned teeth. In addition, the original image's color information is used to optimize the orthodontic outcomes, making the results more natural. We conduct extensive qualitative and clinical experiments, as well as a pilot study, to validate our method.